
VReplication: Improve handling of vplayer stalls #15797

Merged (40 commits, Jun 14, 2024)

Conversation

mattlord
Contributor

@mattlord mattlord commented Apr 25, 2024

Description

Please see the issue for details about the problems we're attempting to solve/improve in this PR.

The improvements here are about detecting when we're not making any progress in the vplayer (running/replicating phase of a vreplication workflow) and showing/logging meaningful errors to replace the eventual generic EOF errors seen before this PR:

vttablet: rpc error: code = Unknown desc = stream (at source tablet) error @ 1cb76134-1dc9-11ef-82c4-5f4020e10986:1-537840: unexpected EOF\nio.ReadFull(packet body of length 2658) failed (errno 2013) (sqlstate HY000)

It takes quite some time for that error to surface in this stalled state, and in the state leading up to it you cannot even stop the workflow, as things have backed up so badly (the stop command hangs for some time before eventually completing; see here).

With this PR, you would now by default get this helpful error instead every 5 minutes (in the vreplication workflow message field, in the _vt.vreplication_log table, and in the vttablet logs):

error in stream 3, will retry after 5s: relay log I/O stalled: progress stalled; vplayer was unable to replicate the transaction in a timely manner; examine the target mysqld instance health and the replicated queries' EXPLAIN output to see why queries are taking unusually long

And you can stop the workflow during this time as we're not letting things back up so badly.

What the PR now does is detect stalls on both "ends" in the vplayer:

  1. It will detect relay log I/O stalls: we cannot successfully replicate the previous relay log contents, despite continual retries, so the relay log fills up with the next batch and we can no longer write to it once we hit the max relay log size/items. This is a stall on the input side of the vplayer.
  2. It will detect heartbeat recording failures: we cannot record the heartbeat in the vreplication workflow record within the minimum heartbeat interval. This is a stall on the output side of the vplayer.

Related Issue(s)

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation is not needed

Contributor

vitess-bot bot commented Apr 25, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes), new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test, enhancement and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Apr 25, 2024
@mattlord mattlord added Component: VReplication Type: Bug and removed NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsWebsiteDocsUpdate What it says NeedsIssue A linked issue is missing for this Pull Request NeedsBackportReason If backport labels have been applied to a PR, a justification is required labels Apr 25, 2024
@github-actions github-actions bot added this to the v20.0.0 milestone Apr 25, 2024
@mattlord mattlord force-pushed the vplayer_batch_trx_timeout branch 2 times, most recently from fe9ed6d to 03a5b03 Compare April 25, 2024 16:03
@mattlord mattlord force-pushed the vplayer_batch_trx_timeout branch from 03a5b03 to bee30d4 Compare April 25, 2024 16:09

codecov bot commented Apr 25, 2024

Codecov Report

Attention: Patch coverage is 85.29412% with 5 lines in your changes missing coverage. Please review.

Please upload report for BASE (main@a16a030). Learn more about missing BASE report.

Files Patch % Lines
.../vt/vttablet/tabletmanager/vreplication/vplayer.go 61.53% 5 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main   #15797   +/-   ##
=======================================
  Coverage        ?   68.56%           
=======================================
  Files           ?     1544           
  Lines           ?   197779           
  Branches        ?        0           
=======================================
  Hits            ?   135608           
  Misses          ?    62171           
  Partials        ?        0           


@mattlord mattlord added Type: Enhancement Logical improvement (somewhere between a bug and feature) and removed Type: Bug labels May 16, 2024
@mattlord mattlord force-pushed the vplayer_batch_trx_timeout branch from cb5fc2e to b78059d Compare May 17, 2024 00:31
@mattlord mattlord force-pushed the vplayer_batch_trx_timeout branch from b78059d to d4517f2 Compare May 17, 2024 01:27
@mattlord mattlord force-pushed the vplayer_batch_trx_timeout branch from 3ebfc1d to c666b14 Compare May 28, 2024 16:48
@mattlord
Copy link
Contributor Author

@rohit-nayak-ps and @shlomi-noach this removes all of the stallHandler related code: c666b14

What we lose without it is the ability to perform out-of-band monitoring and error reporting. That is, the heartbeat method will only detect a stall when it was caused by a failure to commit the transaction that updates the timestamp for the workflow (whether that transaction ran on its own or as part of replicating user-generated events).

If you both prefer that then I'll update the PR description accordingly. Thanks!

Contributor

@shlomi-noach shlomi-noach left a comment


@mattlord thank you, I think we should go with this change, seeing that it's so succinct.

@rohit-nayak-ps
Contributor

@mattlord thank you, I think we should go with this change, seeing that it's so succinct.

Same here. Thanks @mattlord.

@mattlord
Contributor Author

mattlord commented May 29, 2024

The good news is that I can reproduce the exact scenario and symptoms seen in production with the test case, which demonstrates a more typical scenario that would be seen in production, here: #15974

The bad news is that neither of the two methods added here is generating the helpful error message. I'm still getting:

                {
                  "id": "21",
                  "stream_id": "1",
                  "type": "Message",
                  "state": "Running",
                  "created_at": {
                    "seconds": "1716998552",
                    "nanoseconds": 0
                  },
                  "updated_at": {
                    "seconds": "1716998552",
                    "nanoseconds": 0
                  },
                  "message": "vttablet: rpc error: code = Unknown desc = stream (at source tablet) error @ 1cb76134-1dc9-11ef-82c4-5f4020e10986:1-546395: unexpected EOF\nio.ReadFull(packet body of length 78) failed (errno 2013) (sqlstate HY000)",
                  "count": "1"
                }

So I need to keep digging...

@mattlord
Contributor Author

@rohit-nayak-ps and @shlomi-noach OK, now that I'm able to reproduce the exact issue seen in production, which kicked off this work (I was missing a detail in that test case which I only realized this week), I know that I've now fixed it with the latest commit.

Before that commit, even with the stallHandler related code in place, I was still only getting the eventual not-terribly-helpful error that had been seen in production:

# from the 'vtctldclient workflow show' output
"message": "vttablet: rpc error: code = Unknown desc = stream (at source tablet) error @ 1cb76134-1dc9-11ef-82c4-5f4020e10986:1-537840: unexpected EOF\nio.ReadFull(packet body of length 2658) failed (errno 2013) (sqlstate HY000)",

❯ grep -Ri stalled ${VTDATAROOT}/tmp/vttablet*ERROR*

It's quite some time before that error surfaces and when in the state leading up to it you cannot even stop or show the workflow either as things have backed up so badly.

With the latest commit, however (vplayer-progress-deadline=5m and we are in fact not making any progress — the position in the workflow is not moving forward):

 # from the 'vtctldclient workflow show' output
"message": "relay log I/O stalled: progress stalled; vplayer was unable to replicate the transaction in a timely manner; examine the target mysqld instance health and the replicated queries' EXPLAIN output to see why queries are taking unusually long",

❯ grep -Ri stalled ${VTDATAROOT}/tmp/vttablet*ERROR*
E0530 19:24:37.570804   71715 dbclient.go:128] error in stream 3, will retry after 5s: relay log I/O stalled: progress stalled; vplayer was unable to replicate the transaction in a timely manner; examine the target mysqld instance health and the replicated queries' EXPLAIN output to see why queries are taking unusually long
E0530 19:31:04.982780   71715 dbclient.go:128] error in stream 3, will retry after 5s: relay log I/O stalled: progress stalled; vplayer was unable to replicate the transaction in a timely manner; examine the target mysqld instance health and the replicated queries' EXPLAIN output to see why queries are taking unusually long
E0530 19:37:34.468556   71715 dbclient.go:128] error in stream 3, will retry after 5s: relay log I/O stalled: progress stalled; vplayer was unable to replicate the transaction in a timely manner; examine the target mysqld instance health and the replicated queries' EXPLAIN output to see why queries are taking unusually long
...

And you can start/stop/show the workflow during this time as we're not letting things back up so badly.

What the PR now does is detect stalls on both "ends" in the vplayer:

  1. It will detect relay log I/O stalls: we cannot successfully replicate the previous relay log contents, despite continual retries, so the relay log fills up with the next batch and we can no longer write to it once we hit the max relay log size/items. This is a stall on the input side of the vplayer.
  2. It will detect heartbeat recording failures: we cannot record the heartbeat in the workflow record within the minimum heartbeat interval. This is a stall on the output side of the vplayer.

I will now update the PR description as well. I will also add the Do Not Merge label on the PR since we should not merge this until we've cut the v20 RC and main becomes v21. Given that, no rush on the reviews but I appreciate it whenever you can. 🙏

Signed-off-by: Matt Lord <[email protected]>
@mattlord mattlord removed this from the v20.0.0 milestone May 31, 2024
@rohit-nayak-ps
Contributor

It says this in the description:

It's quite some time before that error surfaces when in this stalled state and when in the state leading up to this error you cannot even stop or show the workflow either as things have backed up so badly.

What causes show to hang?

Contributor

@rohit-nayak-ps rohit-nayak-ps left a comment


Nice catch!

@mattlord
Contributor Author

mattlord commented May 31, 2024

It says this in the description:

It's quite some time before that error surfaces when in this stalled state and when in the state leading up to this error you cannot even stop or show the workflow either as things have backed up so badly.

What causes show to hang?

@rohit-nayak-ps I'm sorry, I was wrong about the show part. I've confirmed that in new tests, and now that I think about it, I WAS able to run show during the production incident as well. It was only stop that hung (I'll update the description accordingly). I did see show hang in earlier tests, but that was because things were hung in a different way as a result of other tests.

For the stop in my tests...

1 @ 0x10032f118 0x1002f8b64 0x1002f8724 0x10128d288 0x101291f78 0x1014f0bc0 0x1014f0ba5 0x10192670c 0x101906f30 0x1008c9cc8 0x1008cdd40 0x1008c808c 0x100369854
#	0x10128d287	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*controller).Stop+0x47				vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/controller.go:325
#	0x101291f77	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*Engine).exec+0xcd7					vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/engine.go:452
#	0x1014f0bbf	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*Engine).Exec+0xb6f					vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/engine.go:348
#	0x1014f0ba4	vitess.io/vitess/go/vt/vttablet/tabletmanager.(*TabletManager).UpdateVReplicationWorkflow+0xb54			vitess.io/vitess/go/vt/vttablet/tabletmanager/rpc_vreplication.go:472
#	0x10192670b	vitess.io/vitess/go/vt/vttablet/grpctmserver.(*server).UpdateVReplicationWorkflow+0x11b				vitess.io/vitess/go/vt/vttablet/grpctmserver/server.go:423
#	0x101906f2f	vitess.io/vitess/go/vt/proto/tabletmanagerservice._TabletManager_UpdateVReplicationWorkflow_Handler+0x1bf	vitess.io/vitess/go/vt/proto/tabletmanagerservice/tabletmanagerservice_grpc.pb.go:1634
#	0x1008c9cc7	google.golang.org/grpc.(*Server).processUnaryRPC+0xb57								google.golang.org/[email protected]/server.go:1369
#	0x1008cdd3f	google.golang.org/grpc.(*Server).handleStream+0xb1f								google.golang.org/[email protected]/server.go:1780
#	0x1008c808b	google.golang.org/grpc.(*Server).serveStreams.func2.1+0x8b							google.golang.org/[email protected]/server.go:1019

That's here:

func (ct *controller) Stop() {
	ct.cancel()
	ct.blpStats.Stop()
	<-ct.done
}

Which is waiting for run() to end as that done channel is closed in a defer there.

That goroutine in turn is blocked on this one:

1 @ 0x10032f118 0x1002f8b64 0x1002f8724 0x1012ba348 0x1012ba254 0x1012b9a1c 0x1012bd548 0x1012bcb58 0x10128c28c 0x10128b9a8 0x100369854
#	0x1012ba347	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).fetchAndApply.func4+0x37	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:304
#	0x1012ba253	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).fetchAndApply+0x573	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:309
#	0x1012b9a1b	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).play+0x31b		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:210
#	0x1012bd547	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vreplicator).replicate+0x5d7	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vreplicator.go:311
#	0x1012bcb57	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vreplicator).Replicate+0x27	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vreplicator.go:178
#	0x10128c28b	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*controller).runBlp+0x52b		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/controller.go:257
#	0x10128b9a7	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*controller).run+0x67		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/controller.go:153

Which is here at line 293 (line numbers are a little off as this was done in my PR branch):

case err := <-streamErr:
	defer func() {
		// cancel and wait for the other thread to finish.
		cancel()
		<-applyErr
	}()

And the goroutine for that function:

1 @ 0x10032f118 0x100328948 0x100362b70 0x10040efc8 0x100410310 0x100410301 0x1004b0d98 0x1004c0a14 0x100542c14 0x1003a87c0 0x100dd6ff8 0x100dd6fd1 0x100dd7230 0x100dec9b8 0x100deb964 0x100deb588 0x100deb008 0x101232080 0x1012b8684 0x1012b8c74 0x1012b8c61 0x1012b96c4 0x1012baa28 0x10129bf5c 0x10129aa14 0x1012ba8f8 0x1012bc19c 0x1012bb5ac 0x1012ba394 0x100369854
#	0x100362b6f	internal/poll.runtime_pollWait+0x9f								runtime/netpoll.go:345
#	0x10040efc7	internal/poll.(*pollDesc).wait+0x27								internal/poll/fd_poll_runtime.go:84
#	0x10041030f	internal/poll.(*pollDesc).waitRead+0x1ff							internal/poll/fd_poll_runtime.go:89
#	0x100410300	internal/poll.(*FD).Read+0x1f0									internal/poll/fd_unix.go:164
#	0x1004b0d97	net.(*netFD).Read+0x27										net/fd_posix.go:55
#	0x1004c0a13	net.(*conn).Read+0x33										net/net.go:179
#	0x100542c13	bufio.(*Reader).Read+0x1b3									bufio/bufio.go:241
#	0x1003a87bf	io.ReadAtLeast+0x9f										io/io.go:335
#	0x100dd6ff7	io.ReadFull+0x47										io/io.go:354
#	0x100dd6fd0	vitess.io/vitess/go/mysql.(*Conn).readHeaderFrom+0x20						vitess.io/vitess/go/mysql/conn.go:389
#	0x100dd722f	vitess.io/vitess/go/mysql.(*Conn).readEphemeralPacket+0x6f					vitess.io/vitess/go/mysql/conn.go:427
#	0x100dec9b7	vitess.io/vitess/go/mysql.(*Conn).readComQueryResponse+0x37					vitess.io/vitess/go/mysql/query.go:544
#	0x100deb963	vitess.io/vitess/go/mysql.(*Conn).ReadQueryResult+0x53						vitess.io/vitess/go/mysql/query.go:396
#	0x100deb587	vitess.io/vitess/go/mysql.(*Conn).ExecuteFetchMulti+0x97					vitess.io/vitess/go/mysql/query.go:364
#	0x100deb007	vitess.io/vitess/go/mysql.(*Conn).ExecuteFetch+0x27						vitess.io/vitess/go/mysql/query.go:318
#	0x10123207f	vitess.io/vitess/go/vt/binlog/binlogplayer.(*dbClientImpl).ExecuteFetch+0x2f			vitess.io/vitess/go/vt/binlog/binlogplayer/dbclient.go:144
#	0x1012b8683	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vdbClient).ExecuteFetch+0x1b3	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vdbclient.go:123
#	0x1012b8c73	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vdbClient).Execute+0x43		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vdbclient.go:166
#	0x1012b8c60	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vdbClient).ExecuteWithRetry+0x30	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vdbclient.go:170
#	0x1012b96c3	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.newVPlayer.func1+0x43		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:128
#	0x1012baa27	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).applyRowEvent.func1+0xd7	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:348
#	0x10129bf5b	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.execParsedQuery+0x5b			vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/replicator_plan.go:561
#	0x10129aa13	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*TablePlan).applyChange+0x4a3	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/replicator_plan.go:412
#	0x1012ba8f7	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).applyRowEvent+0x1e7	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:373
#	0x1012bc19b	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).applyEvent+0x82b		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:665
#	0x1012bb5ab	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).applyEvents+0x64b		vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:547
#	0x1012ba393	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication.(*vplayer).fetchAndApply.func2+0x33	vitess.io/vitess/go/vt/vttablet/tabletmanager/vreplication/vplayer.go:281

Which is trying and retrying to commit the last relay log batch, cycling through queries like this:

| 460472 | vt_filtered     | localhost       | vt_commerce | Query            |      1 | updating                                                        | update _vt_vrp_a505f42c1f5611ef9043920702940ee1_20240531100407_ set updated_at='2024-05-31 10:05:38', created_at='2024-05-29 11:54:37', colstuff='foobar' where id=(convert('126062630024235_id' using utf8mb4)) and id2='186062188424265_id2' |

Each of these queries does a full table scan and so takes a few seconds.

So it looks like we process the stop request and cancel the controller's context, which causes the vstreamer to generate an io.EOF stream error. The vplayer receives that error and then waits for the fetch-and-apply goroutine to end and close its error channel before run() can complete, close its done channel, and let the controller finally end. I think we're stuck down in the MySQL layer for some time (it DOES eventually stop), and that layer does not check the context.

@mattlord mattlord added this to the v21.0.0 milestone Jun 4, 2024
@@ -404,6 +404,7 @@ Flags:
--v Level log level for V logs
-v, --version print binary version
--vmodule vModuleFlag comma-separated list of pattern=N settings for file-filtered logging
--vplayer-progress-deadline duration At what point, without having been able to successfully replicate a pending batch of events, should we consider the vplayer stalled; producing an error and log message and restarting the player. (default 5m0s)
Member


Why does this need to be a flag?
I ask because I would like us to be really mindful about introducing new flags. We have too many already.

Member


Rest of the implementation LGTM

Contributor Author

@mattlord mattlord Jun 12, 2024


It doesn't have to be a flag. I don't yet have clear guidance on when you might want to change the default value, only the feeling that such a scenario may come up and being able to use a different value would then be helpful. I could make the 5m deadline static...

Member


Let's make it static for now. I was also unable to imagine when you would want to change it.

@deepthi
Member

deepthi commented Jun 14, 2024

@mattlord please go ahead and merge this after removing the flag.

@mattlord mattlord merged commit 6b0ca40 into vitessio:main Jun 14, 2024
93 checks passed
@mattlord mattlord deleted the vplayer_batch_trx_timeout branch June 14, 2024 17:15
Labels
Component: VReplication · Type: Enhancement (logical improvement, somewhere between a bug and feature)
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

Bug Report: VPlayer does not detect stalls
4 participants